➀ OpenAI's new ChatGPT Atlas AI browser has been jailbroken within a week of its release.
➁ The security exploit is due to 'prompt injection', where unwanted prompts can trick the AI into performing unwanted tasks.
➂ Experts warn about the security risks posed by these browsers, as any exploit of the AI can become a browser-wide exploit.